36 research outputs found

    Human Breathing Rate Estimation from Radar Returns Using Harmonically Related Filters

    Radar-based noncontact sensing of vital signs is often used in safety and rescue missions during disasters such as earthquakes and avalanches, and for home care applications. The radar returns obtained from a human target contain the breathing frequency along with strong higher harmonics, depending on the target’s posture. As a consequence, traditional FFT-based estimators, which are well understood, computationally efficient, and widely used but rely only on the strongest spectral peak, may produce inaccurate breathing rate estimates. This paper proposes a solution for correcting the estimation errors of such single-peak-based algorithms. The proposed method applies harmonically related comb filters over the set of all possible breathing frequencies. The method is tested on three subjects, for different postures, for different distances between the radar and the subject, and on two different radar platforms: PN-UWB and phase-modulated CW (PM-CW) radars. Simplified algorithms more suitable for real-time implementation are also proposed and compared in terms of accuracy and computational complexity. The proposed breathing rate estimation algorithms reduce the mean absolute error of breathing rates by about 81% and 80%, in comparison to the traditional FFT-based methods using strongest-peak detection, for the PN-UWB and PM-CW radars, respectively.
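The harmonic-scoring idea can be sketched as follows: for each candidate breathing frequency, sum the FFT magnitudes at the candidate and its first few harmonics, then pick the best-scoring candidate, so that a dominant harmonic no longer misleads the estimator. This is an illustrative sketch, not the paper's exact comb-filter implementation; the function name, frequency range, and the 1/k harmonic weighting (used here to suppress subharmonic ambiguity) are assumptions.

```python
import numpy as np

def comb_breathing_rate(signal, fs, f_min=0.1, f_max=0.7, n_harmonics=4):
    """Estimate breathing frequency by scoring each candidate frequency
    with the (1/k-weighted) FFT magnitudes at the candidate and its
    harmonics, rather than trusting the single strongest peak."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    candidates = freqs[(freqs >= f_min) & (freqs <= f_max)]
    best_f, best_score = candidates[0], -np.inf
    for f in candidates:
        score = 0.0
        for k in range(1, n_harmonics + 1):
            # nearest FFT bin to the k-th harmonic of this candidate
            idx = np.argmin(np.abs(freqs - k * f))
            score += spectrum[idx] / k
        if score > best_score:
            best_f, best_score = f, score
    return best_f
```

On a synthetic breathing signal whose second harmonic is stronger than the fundamental, a strongest-peak estimator would return twice the true rate, while the comb score still peaks at the fundamental.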

    A PPG-Based Calibration-Free Cuffless Blood Pressure Estimation Method Using Cardiovascular Dynamics

    Traditional cuff-based sphygmomanometers for measuring blood pressure can be uncomfortable and are particularly unsuitable for use during sleep. The proposed alternative method uses dynamic changes in the pulse waveform over short intervals and replaces calibration with information from photoplethysmogram (PPG) morphology, providing a calibration-free approach using a single sensor. Results from 30 patients show a high correlation of 73.64% for systolic blood pressure (SBP) and 77.72% for diastolic blood pressure (DBP) between blood pressure estimated with the PPG morphology features and with the calibration method. This suggests that the PPG morphology features could replace the calibration stage for a calibration-free method of similar accuracy. Applying the proposed methodology to 200 patients and testing on 25 new patients resulted in a mean error (ME) of −0.31 mmHg, a standard deviation of error (SDE) of 4.89 mmHg, and a mean absolute error (MAE) of 3.32 mmHg for DBP, and an ME of −4.02 mmHg, an SDE of 10.40 mmHg, and an MAE of 7.41 mmHg for SBP. These results support the potential for using a PPG signal for calibration-free cuffless blood pressure estimation, and for improving accuracy by adding information from cardiovascular dynamics to other methods in the cuffless blood pressure monitoring field.
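The three error statistics reported here (ME, SDE, MAE) are standard and easy to compute from paired estimated and reference readings; a minimal sketch (the function name is an assumption, and SDE is taken as the sample standard deviation of the signed errors):

```python
import numpy as np

def bp_error_metrics(estimated, reference):
    """Mean error (ME), standard deviation of error (SDE, sample SD),
    and mean absolute error (MAE) between estimated and reference BP."""
    err = np.asarray(estimated, float) - np.asarray(reference, float)
    return {
        "ME": err.mean(),            # signed bias, mmHg
        "SDE": err.std(ddof=1),      # spread of the signed errors
        "MAE": np.abs(err).mean(),   # average magnitude of the errors
    }
```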

    Cuffless Blood Pressure Estimation Using Calibrated Cardiovascular Dynamics in the Photoplethysmogram

    The non-invasive estimation of blood pressure is an important means of preventing and managing cardiovascular disease. There is particular interest in developing approaches that provide accurate, cuffless, and continuous estimation of this important vital sign. This paper proposes a method that uses dynamic changes of the pulse waveform over short time intervals and calibrates the system with a mathematical model that relates reflective pulse transit time (R-PTT) to blood pressure. An advantage of the method is that it only requires collecting the photoplethysmogram (PPG) using one optical sensor, in addition to initial non-invasive measurements of blood pressure that are used for calibration. Applied to data from 30 patients, the method yielded a mean error (ME) of 0.59 mmHg, a standard deviation of error (SDE) of 7.07 mmHg, and a mean absolute error (MAE) of 4.92 mmHg for diastolic blood pressure (DBP), and an ME of 2.52 mmHg, an SDE of 12.15 mmHg, and an MAE of 8.89 mmHg for systolic blood pressure (SBP). These results demonstrate the possibility of using the PPG signal for the cuffless continuous estimation of blood pressure based on the analysis of calibrated changes in cardiovascular dynamics, possibly in conjunction with other methods that are currently being researched.
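As an illustration of the calibration step, one simple family of models relates blood pressure inversely to pulse transit time and fits its two coefficients to the initial cuff readings by least squares. This is a generic sketch under an assumed BP = a/PTT + b model; the paper's actual calibration model, and all names here, are assumptions.

```python
import numpy as np

def calibrate_ptt_model(ptt, bp):
    """Fit BP = a / PTT + b by least squares on calibration readings
    (PTT in seconds, BP in mmHg)."""
    x = 1.0 / np.asarray(ptt, float)
    A = np.column_stack([x, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(A, np.asarray(bp, float), rcond=None)
    return a, b

def predict_bp(ptt, a, b):
    """Predict BP from PTT using the calibrated coefficients."""
    return a / np.asarray(ptt, float) + b
```

After calibration, only the PPG-derived PTT is needed to track blood pressure, which is what makes the single-sensor cuffless setup possible.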

    Classification of English vowels using speech evoked potentials

    The objective of this study is to investigate whether Speech Evoked Potentials (SpEPs), which are auditory brainstem responses to speech stimuli, contain information that can be used to distinguish different speech stimuli. Previous studies on brainstem SpEPs show that they contain valuable information about auditory neural processing. As such, SpEPs may be useful for the diagnosis of central auditory processing disorders and language disability, particularly in children. In this work, we examine the spectral amplitude information of both the Envelope Following Response, which is dominated by spectral components at the fundamental (F0) and its harmonics, and the Frequency Following Response, which is dominated by spectral components in the region of the first formant (F1), of SpEPs in response to the five English language vowels (/a/, /e/, /æ/, /i/, /u/). Using spectral amplitude features, a classification accuracy of 78.3% is obtained with a linear discriminant analysis classifier. Classification of SpEPs demonstrates that brainstem neural responses in the region of F0 and F1 contain valuable information for discriminating vowels. This result provides insight into human auditory processing of speech, and may help develop improved methods for objectively assessing central hearing impairment.
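A linear discriminant analysis classifier of the kind used here can be sketched in a few lines of NumPy: each class gets a mean vector, all classes share a pooled covariance, and a sample is assigned to the class with the highest linear discriminant score. This is a minimal generic LDA, not the study's pipeline; equal class priors and the small regularization term are assumptions.

```python
import numpy as np

def lda_fit(X, y):
    """Minimal LDA: per-class means, shared (pooled) covariance."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    # pooled within-class covariance, lightly regularized for stability
    Xc = np.vstack([X[y == c] - means[i] for i, c in enumerate(classes)])
    cov = Xc.T @ Xc / (len(X) - len(classes)) + 1e-6 * np.eye(X.shape[1])
    return classes, means, np.linalg.inv(cov)

def lda_predict(X, classes, means, icov):
    """Assign each row of X to the class with the highest linear score
    g_k(x) = x·Σ⁻¹μ_k − ½ μ_k·Σ⁻¹μ_k (equal priors assumed)."""
    scores = X @ icov @ means.T - 0.5 * np.sum((means @ icov) * means, axis=1)
    return classes[np.argmax(scores, axis=1)]
```

In the study the feature vectors would be spectral amplitudes around F0 and F1 for each response, with one class per vowel.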


    Classification of speech-evoked brainstem responses to English vowels

    This study investigated whether speech-evoked auditory brainstem responses (speech ABRs) can be automatically separated into distinct classes. Speech ABRs elicited by five synthetic English vowels were classified using linear discriminant analysis, based on features contained in the transient onset response, the sustained envelope following response (EFR), and the sustained frequency following response (FFR). The EFR contains components mainly at frequencies well below the first formant, while the FFR has more energy around the first formant. Accuracies of 83.33% were obtained for combined EFR and FFR features, and 38.33% for transient response features. The EFR features performed relatively well, with a classification accuracy of 70.83%, despite the belief that vowel discrimination depends primarily on the formants. The FFR features obtained a lower accuracy of 59.58%, possibly because the second formant is not well represented in all the responses. Moreover, the classification accuracy based on the transient features exceeded chance level, which indicates that the initial response transients contain vowel-specific information. The results of this study will be useful in a proposed application of speech ABRs to objective hearing aid fitting, if the separation of the brain's responses to different vowels is found to be correlated with perceptual discrimination.

    Prosody perception in hearing loss (Karimi-Boroujeni et al., 2023)

    Purpose: Prosody perception is an essential component of speech communication and social interaction through which both linguistic and emotional information are conveyed. Considering the importance of the auditory system in processing prosody-related acoustic features, the aim of this article is to review the effects of hearing impairment on prosody perception in children and adults. It also assesses the performance of hearing assistive devices in restoring prosodic perception. Method: Following a comprehensive online database search, two lines of inquiry were targeted. The first summarizes recent attempts toward determining the effects of hearing loss and interacting factors such as age and cognitive resources on prosody perception. The second analyzes studies reporting beneficial or detrimental impacts of hearing aids, cochlear implants, and bimodal stimulation on prosodic abilities in people with hearing loss. Results: The reviewed studies indicate that hearing-impaired individuals vary widely in perceiving affective and linguistic prosody, depending on factors such as hearing loss severity, chronological age, and cognitive status. In addition, most of the emerging information points to limitations of hearing assistive devices in processing and transmitting the acoustic features of prosody. Conclusions: The existing literature is incomplete in several respects, including the lack of a consensus on how and to what extent hearing prostheses affect prosody perception, especially the linguistic function of prosody, and a gap in assessing prosody under challenging listening situations such as noise. This review article proposes directions that future research could follow to provide a better understanding of prosody processing in those with hearing impairment, which may help health care professionals and designers of assistive technology to develop innovative diagnostic and rehabilitation tools. Supplemental Material S1: Features of the assessment tools evaluating prosody perception. Supplemental Material S2: Summary of studies focused on prosody perception in hearing aid and cochlear implant users. Karimi-Boroujeni, M., Dajani, H. R., & Giguère, C. (2023). Perception of prosody in hearing-impaired individuals and users of hearing assistive devices: An overview of recent advances. Journal of Speech, Language, and Hearing Research. Advance online publication. https://doi.org/10.1044/2022_JSLHR-22-00125

    Comparison of direct measurement methods for headset noise exposure in the workplace

    The measurement of noise exposure from communication headsets poses a methodological challenge. Although several standards describe methods for general noise measurements in occupational settings, these are not directly applicable to noise assessments under communication headsets. For measurements under occluded ears, specialized methods such as the microphone-in-real-ear and acoustic manikin techniques are specified by the International Organization for Standardization (ISO 11904). Simpler methods have also been proposed in some national standards, such as the use of general-purpose artificial ears and simulators in conjunction with single-number corrections to convert measurements to the equivalent diffuse field. However, little is known about the measurement agreement between these various methods and the acoustic manikin technique. Twelve experts positioned circum-aural, supra-aural, and insert communication headsets on four different measurement setups (Type 1, Type 2, and Type 3.3 artificial ears, and an acoustic manikin). Fit-refit measurements of four audio communication signals were taken under quiet laboratory conditions. Data were transformed into equivalent diffuse-field sound levels using third-octave procedures. Results indicate that the Type 1 artificial ear is not suited for the measurement of sound exposure under communication headsets, while the Type 2 and Type 3.3 artificial ears are in good agreement with the acoustic manikin technique. Single-number corrections were found to introduce a large measurement uncertainty, making the use of the third-octave transformation preferable.
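The third-octave transformation amounts to applying a per-band ear-to-diffuse-field correction and then recombining the corrected bands on an energy basis. A minimal sketch follows; the correction values passed in are placeholders (ISO 11904 tabulates the actual per-band corrections), and the function name is an assumption.

```python
import numpy as np

def diffuse_field_level(band_levels_db, corrections_db):
    """Convert third-octave band levels measured at the (artificial) ear
    to an equivalent diffuse-field level: subtract the per-band
    correction, then sum the bands energetically."""
    corrected = np.asarray(band_levels_db, float) - np.asarray(corrections_db, float)
    # energy sum: L_total = 10 log10( sum_i 10^(L_i / 10) )
    return 10.0 * np.log10(np.sum(10.0 ** (corrected / 10.0)))
```

A single-number correction would instead shift the overall level by one fixed amount, which is exactly what the study found to introduce larger measurement uncertainty than the per-band approach.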